
    A compromise between reductionism and non-reductionism

    This paper investigates the seeming incompatibility of reductionism and non-reductionism in the context of the complexity sciences. I review algorithmic information theory for this purpose, offer two physical metaphors to build a better understanding of algorithmic complexity, and briefly discuss its advantages, shortcomings, and applications. I then revisit non-reductionist approaches in the philosophy of mind, which are often arguments from ignorance against physicalism. A new approach, called mild non-reductionism, is proposed, which reconciles the need to acknowledge the irreducibility found in complex systems with the need to maintain physicalism. © 2007 by World Scientific Publishing Co. Pte. Ltd. All rights reserved.
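    The abstract's central technical notion, algorithmic (Kolmogorov) complexity, has a standard one-line definition; the statement below is the textbook form, given for orientation, not a formula quoted from the paper itself.

```latex
% Kolmogorov (algorithmic) complexity of a binary string x, relative to
% a fixed universal Turing machine U: the length of the shortest
% program p that makes U print x and halt.
K_U(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}
% Invariance theorem: for any other universal machine V there is a
% constant c_{UV}, independent of x, with K_U(x) <= K_V(x) + c_{UV},
% so the measure is machine-independent up to an additive constant.
```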

    Genetic algorithms with DNN-based trainable crossover as an example of partial specialization of general search

    Universal induction relies on a general search procedure that is doomed to be inefficient. One way to achieve both generality and efficiency is to specialize this procedure with respect to any given narrow task. However, complete specialization, which implies a direct mapping from task parameters to solutions (a discriminative model) without search, is not always possible. In this paper, partial specialization of general search is considered in the form of genetic algorithms (GAs) with a specialized crossover operator. We perform a feasibility study of this idea, implementing such an operator as a deep feedforward neural network. GAs with trainable crossover operators are compared with the result of complete specialization, which is also represented as a deep neural network. Experimental results show that specialized GAs can be more efficient than both general GAs and discriminative models. Comment: AGI 2017 proceedings; the final publication is available at link.springer.com.
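    The following is a minimal runnable sketch of the idea the abstract describes: a GA whose crossover operator is a feedforward network mapping two parents to a child. All concrete choices here (genome length, layer sizes, the toy fitness function, truncation selection, and leaving the crossover network untrained) are illustrative assumptions, not the paper's actual setup; in the paper's scheme the crossover network would be trained so that repeated runs on a family of tasks specialize the search.

```python
# Sketch: GA with a neural crossover operator (all parameters are
# illustrative assumptions; see the note above).
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # genome length (assumption)


def fitness(x):
    # Toy objective: minimize the sphere function (higher is better).
    return -float(np.sum(x ** 2))


class NeuralCrossover:
    """One-hidden-layer feedforward net mapping two parents to a child."""

    def __init__(self, dim, hidden=32):
        self.W1 = rng.normal(0.0, 0.1, (2 * dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, dim))
        self.b2 = np.zeros(dim)

    def __call__(self, p1, p2):
        # Concatenate the parents and run them through the network.
        h = np.tanh(np.concatenate([p1, p2]) @ self.W1 + self.b1)
        return h @ self.W2 + self.b2  # child genome


def evolve(pop_size=50, generations=100):
    cross = NeuralCrossover(DIM)  # in the paper's scheme, this is trained
    pop = rng.normal(0.0, 1.0, (pop_size, DIM))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Truncation selection: keep the better half as parents.
        parents = pop[np.argsort(scores)[pop_size // 2:]]
        children = []
        for _ in range(pop_size - len(parents)):
            i, j = rng.choice(len(parents), size=2, replace=False)
            child = cross(parents[i], parents[j])
            child += rng.normal(0.0, 0.05, DIM)  # Gaussian mutation
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))]


if __name__ == "__main__":
    best = evolve()
    print("best fitness:", fitness(best))
```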

    Diverse consequences of algorithmic probability

    We reminisce about and discuss applications of algorithmic probability to a wide range of problems in artificial intelligence, philosophy, and technological society. We propose that Solomonoff has effectively axiomatized the field of artificial intelligence, thereby establishing it as a rigorous scientific discipline. We also relate this perspective to our own work in incremental machine learning and the philosophy of complexity. © 2013 Springer-Verlag Berlin Heidelberg.
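    For context, the algorithmic probability the abstract refers to is Solomonoff's universal prior; the definition below is the standard form, given for orientation and not quoted from the paper.

```latex
% Solomonoff's universal a priori probability of a binary string x,
% for a fixed universal monotone machine U: sum over all (minimal)
% programs p whose output begins with x.
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}
% Sequence prediction then follows by conditioning: the predicted
% probability that x continues with bit b is M(xb) / M(x).
```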

    The Unbearable Shallow Understanding of Deep Learning
